DTE AICCOMAS 2025

Keynote

Combining Graph Networks and Reinforcement Learning for Consistent Turbulence Modeling

  • Kurz, Marius (Centrum Wiskunde & Informatica)
  • Sanderse, Benjamin (Centrum Wiskunde & Informatica)


In recent years, it has become widely recognized that training machine learning (ML) models for turbulence modeling in large eddy simulation (LES) in an offline manner does not automatically yield accurate and stable results when the models are applied in actual simulations (Sanderse et al., 2024). One approach to tackling this discrepancy between a priori and a posteriori performance is reinforcement learning (RL), in which the ML model is trained directly within the simulation environment (Kurz et al., 2023). Another shortcoming of data-driven models is that they generally do not adhere to the constraints and symmetries of the underlying physics. While numerical methods for computational fluid dynamics are typically designed to obey, for instance, the rotational and translational symmetries of the governing equations, data-driven turbulence models often violate these properties or fulfill them only approximately, yielding physically inconsistent results. This work demonstrates how RL and graph neural networks (GNN) (Kipf & Welling, 2017) can be employed to address these issues and improve the stability and physical consistency of data-enhanced modeling approaches in practical LES. However, both RL and GNN introduce additional layers of complexity to the modeling problem. Hence, special emphasis is placed on the additional challenges and practical problems these approaches introduce for LES modeling, in particular the computational cost of the training process and the methodological and implementation complexities associated with online RL training.
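As a minimal illustration of the symmetry argument above (not taken from the paper, and not the authors' architecture): a message-passing step that consumes only relative node positions is exactly invariant under translations of the input, whereas a model fed absolute coordinates generally is not. The layer, weights, and graph below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def message_passing(pos, edges, W):
    """One toy GNN message-passing step built on relative positions.

    pos   : (n, d) node coordinates
    edges : list of directed edges (i, j)
    W     : (d, d) weight matrix (random stand-in for learned weights)
    """
    n, d = pos.shape
    out = np.zeros((n, d))
    for i, j in edges:
        rel = pos[j] - pos[i]        # relative position: a global shift cancels out
        out[i] += np.tanh(rel @ W)   # simple nonlinear message aggregation
    return out

pos = rng.normal(size=(5, 2))
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
W = rng.normal(size=(2, 2))

# Translate the whole point cloud; the layer's output is unchanged.
shift = np.array([10.0, -3.0])
y0 = message_passing(pos, edges, W)
y1 = message_passing(pos + shift, edges, W)
assert np.allclose(y0, y1)
```

Rotational equivariance requires more care (e.g., equivariant message functions), but the same principle applies: building the inductive bias into the architecture, rather than hoping it emerges from training data.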